We endeavor on a rarely explored task named Insubstantial Object Detection (IOD), which aims to localize objects with the following characteristics: (1) amorphous shape with indistinct boundaries; (2) similarity to surroundings; (3) absence of color. Accordingly, it is far more challenging to distinguish insubstantial objects in a single static frame, and a collaborative representation of spatial and temporal information is crucial. We therefore construct an IOD-Video dataset comprising 600 videos (141,017 frames), covering various distances, sizes, visibility conditions, and scenes captured in different spectral ranges. In addition, we develop a spatio-temporal aggregation framework for IOD, in which different backbones are deployed and a spatio-temporal aggregation loss (STAloss) is elaborately designed to leverage consistency along the time axis. Experiments conducted on the IOD-Video dataset demonstrate that spatio-temporal aggregation can significantly improve the performance of IOD. We hope our work will attract further research into this valuable yet challenging task. The code will be available at: \url{https://github.com/calayzhou/iod-video}.
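The exact form of STAloss is not given in the abstract; as a rough illustration of a temporal-consistency term along the time axis, a minimal PyTorch sketch might look as follows (tensor names and the formulation are assumptions, not the paper's implementation):

```python
# Hypothetical temporal-consistency term in the spirit of STAloss.
# The paper's exact formulation may differ; shapes and names are assumptions.
import torch

def temporal_consistency_loss(box_centers: torch.Tensor) -> torch.Tensor:
    """box_centers: (T, N, 2) predicted box centers for N tracked objects over T frames.
    Penalizes abrupt frame-to-frame jumps, encouraging smooth trajectories along the time axis."""
    velocity = box_centers[1:] - box_centers[:-1]      # first-order temporal difference, (T-1, N, 2)
    acceleration = velocity[1:] - velocity[:-1]        # second-order difference, (T-2, N, 2)
    return velocity.pow(2).mean() + acceleration.pow(2).mean()

# Example: 8 frames, 4 objects
loss = temporal_consistency_loss(torch.randn(8, 4, 2))
```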
Although significant progress has been made in audio-driven talking face generation, existing methods either neglect facial emotion or cannot be applied to arbitrary subjects. In this paper, we propose the Emotion-Aware Motion Model (EAMM) to generate one-shot emotional talking faces by involving an emotion source video. Specifically, we first propose an Audio2Facial-Dynamics module, which renders talking faces from audio-driven unsupervised zero- and first-order keypoint motion. Then, by exploring the properties of the motion model, we further propose an Implicit Emotion Displacement Learner to represent emotion-related facial dynamics as linearly additive displacements to the previously acquired motion representations. Comprehensive experiments demonstrate that, by incorporating the results of the two modules, our method can produce satisfactory talking face results on arbitrary subjects with realistic emotion patterns.
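To make the "linearly additive displacement" idea concrete, the following is a minimal PyTorch sketch in which an emotion code is mapped to per-keypoint displacements and simply added to the audio-driven keypoint motion (module name, dimensions, and the mapping are assumptions, not the authors' implementation):

```python
# Sketch: emotion-related dynamics as an additive shift on audio-driven keypoints.
import torch
import torch.nn as nn

class ImplicitEmotionDisplacement(nn.Module):
    def __init__(self, emotion_dim: int = 16, num_keypoints: int = 10):
        super().__init__()
        # Maps an emotion code to per-keypoint 2D displacements.
        self.to_displacement = nn.Linear(emotion_dim, num_keypoints * 2)
        self.num_keypoints = num_keypoints

    def forward(self, audio_driven_kp: torch.Tensor, emotion_code: torch.Tensor) -> torch.Tensor:
        # audio_driven_kp: (B, K, 2) keypoints predicted from audio alone
        disp = self.to_displacement(emotion_code).view(-1, self.num_keypoints, 2)
        return audio_driven_kp + disp  # emotion applied as a linearly additive displacement

kp = torch.randn(2, 10, 2)
emo = torch.randn(2, 16)
out = ImplicitEmotionDisplacement()(kp, emo)
```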
Recovering detailed facial geometry from a set of calibrated multi-view images is valuable for its wide range of applications. Traditional multi-view stereo (MVS) methods adopt optimization techniques to regularize the matching cost. Recently, learning-based methods integrate all of this into an end-to-end neural network and show superior efficiency. In this paper, we propose a novel architecture to recover extremely detailed 3D faces in roughly 10 seconds. Unlike previous learning-based methods that regularize the cost volume via a 3D CNN, we propose to learn an implicit function for regressing the matching cost. By fitting a 3D morphable model from the multi-view images, features from multiple images are extracted and aggregated in the mesh-attached UV space, which makes the implicit function more effective in recovering detailed facial shape. Our method outperforms SOTA learning-based MVS in accuracy by a large margin on the FaceScape dataset. The code and data will be released soon.
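As a rough illustration of "learning an implicit function for regressing the matching cost", a minimal sketch could be an MLP that maps a UV-space location plus aggregated multi-view features to a scalar cost (dimensions, names, and the input design are illustrative assumptions, not the paper's architecture):

```python
# Sketch: an implicit function over the mesh-attached UV space.
import torch
import torch.nn as nn

class ImplicitCostFunction(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # regressed matching cost at this UV sample
        )

    def forward(self, uv: torch.Tensor, aggregated_feat: torch.Tensor) -> torch.Tensor:
        # uv: (N, 2) coordinates in UV space
        # aggregated_feat: (N, feat_dim) features pooled from the multi-view images
        return self.mlp(torch.cat([uv, aggregated_feat], dim=-1))

cost = ImplicitCostFunction()(torch.rand(1024, 2), torch.randn(1024, 64))
```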
We propose a parametric model that maps free-view images into a vector space encoding facial shape, expression, and appearance with a neural radiance field, namely Morphable Facial NeRF (MoFaNeRF). Specifically, MoFaNeRF takes the encoded facial shape, expression, and appearance together with a spatial coordinate and view direction as input to an MLP, and outputs the radiance of the spatial point for photo-realistic image synthesis. Compared with conventional 3D morphable models (3DMM), MoFaNeRF shows superiority in directly synthesizing photo-realistic facial details, even for the eyes, mouth, and beard. Moreover, continuous face morphing can easily be achieved by interpolating the input shape, expression, and appearance codes. By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details and shows strong representation ability. Our model demonstrates strong potential in multiple applications, including image-based fitting, random generation, face rigging, face editing, and novel view synthesis. Experiments show that our method achieves higher representation ability than previous parametric models and achieves competitive performance in several applications. To the best of our knowledge, our work is the first facial parametric model built upon a neural radiance field that can be used for fitting, generation, and manipulation. Our code and model are released at https://github.com/zhuhao-nju/mofanerf.
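A minimal sketch of this kind of conditioned radiance field is shown below: an MLP takes shape, expression, and appearance codes together with a 3D position and view direction and outputs radiance and density (layer sizes and architecture details are assumptions, not the released MoFaNeRF model):

```python
# Sketch: a morphable-face NeRF-style MLP conditioned on shape/expression/appearance codes.
import torch
import torch.nn as nn

class MorphableFacialNeRF(nn.Module):
    def __init__(self, code_dim: int = 3 * 64, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3 + 3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.to_rgb = nn.Linear(hidden, 3)     # radiance of the spatial point
        self.to_sigma = nn.Linear(hidden, 1)   # volume density

    def forward(self, xyz, view_dir, shape_code, exp_code, app_code):
        codes = torch.cat([shape_code, exp_code, app_code], dim=-1)
        h = self.backbone(torch.cat([xyz, view_dir, codes], dim=-1))
        return self.to_rgb(h), self.to_sigma(h)

rgb, sigma = MorphableFacialNeRF()(
    torch.randn(4, 3), torch.randn(4, 3),
    torch.randn(4, 64), torch.randn(4, 64), torch.randn(4, 64))
```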
In this paper, we present a large-scale detailed 3D face dataset, FaceScape, and a corresponding benchmark to evaluate single-view facial 3D reconstruction. By training on FaceScape data, a novel algorithm is proposed to predict elaborate riggable 3D face models from a single image input. The FaceScape dataset provides 18,760 textured 3D faces, captured from 938 subjects, each with 20 specific expressions. The 3D models contain pore-level facial geometry and are also processed to be topologically uniform. These fine 3D facial models can be represented as a 3D morphable model for coarse shapes and displacement maps for detailed geometry. Taking advantage of the large-scale and high-accuracy dataset, a novel algorithm is further proposed to learn expression-specific dynamic details using a deep neural network. The learned relationship serves as the foundation of our 3D face prediction system from a single image input. Different from previous methods, our predicted 3D models have highly detailed geometry under different expressions. We also use FaceScape data to generate in-the-wild and in-the-lab benchmarks to evaluate recent single-view face reconstruction methods. The accuracy is reported and analyzed along the dimensions of camera pose and focal length, which provides a faithful and comprehensive evaluation and reveals new challenges. The unprecedented dataset, benchmark, and code have been released to the public for research purposes.
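As an illustration of the "coarse 3DMM plus displacement map" representation, a minimal NumPy sketch is given below: a coarse mesh is assembled from identity and expression coefficients and then refined by a per-vertex displacement along the normal (array shapes and the blending scheme are assumptions for illustration only):

```python
# Sketch: coarse morphable-model shape refined by a displacement map.
import numpy as np

def reconstruct_face(mean_shape, id_basis, exp_basis, id_coeff, exp_coeff,
                     normals, displacement):
    """
    mean_shape:   (V, 3) mean face vertices
    id_basis:     (V, 3, n_id) identity blendshape basis
    exp_basis:    (V, 3, n_exp) expression blendshape basis
    id_coeff:     (n_id,) identity coefficients
    exp_coeff:    (n_exp,) expression coefficients
    normals:      (V, 3) per-vertex normals of the coarse mesh
    displacement: (V,) scalar displacement sampled from the displacement map
    """
    coarse = mean_shape + id_basis @ id_coeff + exp_basis @ exp_coeff   # (V, 3) coarse shape
    return coarse + displacement[:, None] * normals                     # detailed geometry

V, n_id, n_exp = 100, 50, 20
verts = reconstruct_face(np.zeros((V, 3)), np.random.randn(V, 3, n_id),
                         np.random.randn(V, 3, n_exp), np.random.randn(n_id),
                         np.random.randn(n_exp), np.random.randn(V, 3),
                         np.random.randn(V))
```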
Sensory and emotional experiences such as pain and empathy are essential for mental and physical health. Cognitive neuroscience has been working on revealing mechanisms underlying pain and empathy. Furthermore, as trending research areas, computational pain recognition and empathic artificial intelligence (AI) show progress and promise for healthcare and human-computer interaction. Although AI research has recently made it increasingly possible to create artificial systems with affective processing, most cognitive neuroscience and AI research does not jointly address the issues of empathy in AI and cognitive neuroscience. The main aim of this paper is to introduce key advances, cognitive challenges, and technical barriers in computational pain recognition and the implementation of artificial empathy. Our discussion covers the following topics: How can AI recognize pain from unimodal and multimodal information? Is it crucial for AI to be empathic? What are the benefits and challenges of empathic AI? Despite some consensus on the importance of AI with empathic recognition and responses, we also highlight future challenges for artificial empathy and possible paths from interdisciplinary perspectives. Furthermore, we discuss challenges for responsible evaluation of cognitive methods and computational techniques and outline approaches for future work toward affective assistants capable of empathy.
Learning-based 3D shape segmentation is usually formulated as a semantic labeling problem, assuming that all parts of the training shapes are annotated with a given set of labels. This assumption, however, is impractical for learning fine-grained segmentation. Although most off-the-shelf CAD models are, by construction, composed of fine-grained parts, they usually lack semantic tags, and labeling those fine-grained parts is extremely tedious. We approach the problem with deep clustering, where the key idea is to learn part priors from a shape dataset with fine-grained segmentation but no part labels. Given a point-sampled 3D shape, we model the clustering priors of points with a similarity matrix and achieve part segmentation by minimizing a novel low-rank loss. To handle highly dense point sets, we adopt a divide-and-conquer strategy: we partition the large point set into a number of blocks, and each block is segmented into parts using a deep-clustering-based part segmentation network trained in a category-agnostic manner. We then train a graph convolution network to merge the segments of all blocks into the final segmentation result. Our method is evaluated on a challenging benchmark of fine-grained segmentation and shows state-of-the-art performance.
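The abstract does not spell out the low-rank loss; a plausible minimal sketch is to build a pairwise similarity matrix from per-point embeddings and penalize its nuclear norm so that points group into a small number of coherent parts (names and the exact objective are assumptions, not the paper's loss):

```python
# Sketch: a low-rank penalty on a pairwise point-similarity matrix.
import torch

def low_rank_loss(point_features: torch.Tensor) -> torch.Tensor:
    """point_features: (N, D) per-point embeddings of one 3D shape.
    Builds a cosine-similarity matrix and penalizes its nuclear norm (sum of
    singular values), encouraging a small number of coherent clusters."""
    f = torch.nn.functional.normalize(point_features, dim=-1)
    similarity = f @ f.t()                                   # (N, N) similarity matrix
    return torch.linalg.matrix_norm(similarity, ord='nuc')   # nuclear norm

loss = low_rank_loss(torch.randn(256, 32))
```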
Decompilation aims to transform a low-level program language (LPL) (e.g., a binary file) into its functionally equivalent high-level program language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
Nearest-Neighbor (NN) classification has been proven to be a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique, which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation that is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support sample based on its similarity to different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning-based method can outperform, or is at least comparable to, SOTA methods that require additional learning steps.
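As an illustration of prior-driven calibration followed by nearest-neighbor classification, here is a minimal PyTorch sketch that shifts each support feature toward similarity-weighted base prototypes before NN matching (the weighting scheme and hyperparameters are assumptions, not the exact P3DC-Shot procedure):

```python
# Sketch: calibrate support features with base-class prototypes, then NN-classify queries.
import torch
import torch.nn.functional as F

def calibrate_support(support: torch.Tensor, base_prototypes: torch.Tensor,
                      alpha: float = 0.7, temperature: float = 10.0) -> torch.Tensor:
    """support: (S, D) support features; base_prototypes: (B, D) base-class prototypes."""
    sim = F.softmax(temperature * F.normalize(support, dim=-1)
                    @ F.normalize(base_prototypes, dim=-1).t(), dim=-1)    # (S, B) similarities
    prior = sim @ base_prototypes                                           # (S, D) similarity-weighted prior
    return alpha * support + (1 - alpha) * prior                            # calibrated support features

def nn_classify(query: torch.Tensor, calibrated_support: torch.Tensor,
                support_labels: torch.Tensor) -> torch.Tensor:
    dist = torch.cdist(query, calibrated_support)    # (Q, S) pairwise distances
    return support_labels[dist.argmin(dim=-1)]       # label of the nearest calibrated support sample

support, base = torch.randn(5, 64), torch.randn(100, 64)
preds = nn_classify(torch.randn(15, 64), calibrate_support(support, base),
                    torch.arange(5))
```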
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is to produce a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to balance the trade-off between content details and style features. When stylizing an image with sufficient style patterns, the content details may be damaged, and sometimes objects in the image can no longer be clearly distinguished. For this reason, we present a new transformer-based method named STT for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
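A minimal sketch of an edge loss in this spirit is shown below: Sobel edge maps of the content image and the stylized output are compared so that content boundaries survive stylization (the paper's exact edge definition and loss form may differ):

```python
# Sketch: an edge loss comparing Sobel edge maps of content and stylized images.
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """img: (B, C, H, W). Returns the per-pixel gradient magnitude of the grayscale image."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_loss(content: torch.Tensor, stylized: torch.Tensor) -> torch.Tensor:
    return F.l1_loss(sobel_edges(stylized), sobel_edges(content))

loss = edge_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```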